Option | TORQUE command | LoadLeveler (LL) command |
---|---|---|
Queue | #PBS -q [queue] | #@ class=[queue] |
Nodes | #PBS -l nodes=[#] | #@ node=[#] |
Processors | #PBS -l ppn=[#] | #@ tasks_per_node=[#] |
Wall clock limit | #PBS -l walltime=[hh:mm:ss] | #@ wall_clock_limit=[hh:mm:ss] |
Standard output file | #PBS -o [file] | #@ output=[file] |
Standard error | #PBS -e [file] | #@ error=[file] |
Copy environment | #PBS -V | #@ environment=COPY_ALL |
Notification event | #PBS -m abe | #@ notification=start\|error\|complete\|never\|always |
Email address | #PBS -M [email] | #@ notify_user=[email] |
Job name | #PBS -N [name] | #@ job_name=[name] |
Job restart | #PBS -r [y\|n] | #@ restart=[yes\|no] |
Job type | n/a | #@ job_type=[type] |
Initial directory | n/a | #@ initialdir=[directory] |
Node usage | #PBS -l naccesspolicy=singlejob | #@ node_usage=not_shared |
Memory requirement | #PBS -l mem=XXXXmb | #@ requirements=(Memory >= NumMegaBytes) |
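For orientation, the same minimal job header might look as follows in the two systems; the queue/class name, job name, file names, and resource values below are made-up examples, not site defaults:

    # TORQUE
    #PBS -q workq
    #PBS -N myprog
    #PBS -l nodes=2:ppn=32
    #PBS -l walltime=00:30:00
    #PBS -o myprog.out
    #PBS -e myprog.err

    # LoadLeveler
    # @ class = workq
    # @ job_name = myprog
    # @ node = 2
    # @ tasks_per_node = 32
    # @ wall_clock_limit = 00:30:00
    # @ output = myprog.out
    # @ error = myprog.err
    # @ queue

The individual LoadLeveler keywords are described in more detail below.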
# @ shell = /client/bin/ksh
Specifies the shell to be used. Default is your login shell.
# @ job_type = parallel / serial
Specifies whether your job is parallel or serial.
# @ job_name = myprog.job
Assigns the job name to the request.
# @ initialdir =
Chooses the directory the job should start in.
# @ output =
Specifies the name and location of standard output.
# @ error =
Specifies the name and location of standard error output.
# @ notification = complete
Specifies that an email should be sent on job completion; other possible options are start, error, never, and always.
# @ tasks_per_node = <#MPI-tasks>
Specifies the number of MPI processes for the job. The value <#MPI-tasks> cannot be greater than 64, because one Power6 node has 32 processors with 2 hardware threads each, i.e. 64 logical CPUs in "Simultaneous Multithreading" (SMT) mode. If you are using OpenMP only, you have to set this variable to 1.
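As a sketch, typical settings for a single node would be (the counts follow the limits described above):

    # pure MPI in ST mode: at most 32 tasks per node
    # @ tasks_per_node = 32
    # pure MPI in SMT mode: at most 64 tasks per node
    # @ tasks_per_node = 64
    # pure OpenMP: exactly one task per node
    # @ tasks_per_node = 1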
# @ resources = ConsumableMemory(750mb)
Specifies the amount of main memory per MPI task. Two kinds of nodes are available: the usable memory is 48000MB on small nodes (p081-p240) and 96000MB on large nodes (p001-p080). The maximum value of ConsumableMemory is therefore the usable memory of the node divided by the number of tasks per node.
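A rough worked example of that calculation (the task counts are illustrative):

    # small node (48000MB usable), 32 MPI tasks (ST mode): 48000MB / 32 = 1500MB per task
    # @ resources = ConsumableMemory(1500mb)
    # small node (48000MB usable), 64 MPI tasks (SMT mode): 48000MB / 64 = 750MB per task
    # @ resources = ConsumableMemory(750mb)
    # large node (96000MB usable), 64 MPI tasks (SMT mode): 96000MB / 64 = 1500MB per task
    # @ resources = ConsumableMemory(1500mb)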
# @ resources = ConsumableCpus(<#OpenMP-threads>)
Specifies the number of threads if you are running a pure OpenMP application.
# @ task_affinity = core(<#OpenMP-threads>)
Specifies the number of OpenMP threads for hybrid applications if the product (#MPI-tasks * #OpenMP-threads) is less than or equal to 32 (ST mode). For pure MPI programs in ST mode set task_affinity=core(1).
# @ task_affinity = cpu(<#OpenMP-threads>)
Specifies the number of OpenMP threads for hybrid applications if the product (#MPI-tasks * #OpenMP-threads) is larger than 32 (SMT mode). This product may not exceed 64. For pure MPI applications in SMT mode set task_affinity=cpu(1).
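A sketch of the two hybrid cases; the task and thread counts are illustrative, and the OpenMP thread count usually also has to be passed to the application itself (e.g. via OMP_NUM_THREADS):

    # hybrid MPI+OpenMP in ST mode: 8 tasks * 4 threads = 32 (<= 32)
    # @ tasks_per_node = 8
    # @ task_affinity = core(4)

    # hybrid MPI+OpenMP in SMT mode: 16 tasks * 4 threads = 64 (> 32, <= 64)
    # @ tasks_per_node = 16
    # @ task_affinity = cpu(4)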
# @ wall_clock_limit = 00:10:00
Specifies that your job requires HH:MM:SS of wall clock time.
# @ node_usage = not_shared
Reserves nodes for exclusive usage; the other possible option is shared.
# @ network.MPI = sn_all,not_shared,us
Uses all InfiniBand adapters in fast user space mode.
# @ account_no =
Specifies which account you want to use for this job, in case you are subscribed to more than one project. If you do not use the account_no keyword in your job script, the computing time consumption is charged to your default account stored in $HOME/.acct.
# @ rset = rset_mcm_affinity
Defines resource sets for binding tasks to CPUs. Do not set this parameter if you do the task binding yourself. Explicit rset does not work at some other sites (e.g. ECMWF).
# @ core_limit = 0
Specifies the hard limit, soft limit, or both limits for the size of a core file. This limit applies per process. If you want no core file(s), set core_limit=0. Other options are unlimited or a limit in KB or MB.
# @ queue
This statement is mandatory and marks the end of your keyword definitions.
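Putting the keywords together, a complete LoadLeveler job script might look like the following sketch. The e-mail address, file names, and executable name are placeholders, and the launch of the MPI program with poe is a site-specific assumption; adapt everything to your project:

    #!/client/bin/ksh
    # @ shell = /client/bin/ksh
    # @ job_type = parallel
    # @ job_name = myprog.job
    # @ output = myprog.out
    # @ error = myprog.err
    # @ notification = complete
    # @ notify_user = user@example.org
    # @ wall_clock_limit = 00:10:00
    # @ node = 1
    # @ tasks_per_node = 64
    # @ resources = ConsumableMemory(750mb)
    # @ task_affinity = cpu(1)
    # @ node_usage = not_shared
    # @ network.MPI = sn_all,not_shared,us
    # @ rset = rset_mcm_affinity
    # @ core_limit = 0
    # @ queue

    # the commands executed by the job follow the queue statement;
    # starting the MPI program with poe is an assumption, check your site documentation
    poe ./myprog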